997 results for Adapted image


Relevance:

60.00%

Publisher:

Abstract:

The scope of this study is an investigation of how the blind person learns school knowledge mediated by images in the context of inclusive education, and how that learning can be (or is) supported by the adaptation of images for tactile apprehension by the blind person and the correlative process of reading. To achieve this aim we chose a qualitative research approach and opted for the case-study modality, based on the empirical field of a public school in the city of Cruzeta, RN, and taking as the main subject a congenitally blind female student enrolled in high school there, focusing chiefly on the discipline of geography and its cartographic content. Our procedures for the construction of data involved documentary analysis, open reflective interviews and observation. The theory guiding our assessments lies in the current understanding of human psychological development and its educational process from an inclusive perspective, in contemporary conceptions of visual disability, and in the notion of the image as a cultural product. Accordingly, the human person is a concrete subject whose development is deeply marked by culture, historically built by human society. This subject, regardless of his or her specific features, grasps the world in an interactive and immediate way, internalising and producing culture. In this light, we believe that the blind person perceives the stimuli of the environment through multiple senses and acts in the world towards integration into the social environment. The image, as a product of culture that is historically and socially determined, appears as a sign conventionally used as an icon which concentrates knowledge, and the student who does not perceive himself or herself and the surroundings visually cannot be excluded from it. In this direction, an inclusive educational process must build conditions of access to knowledge for all students without distinction, including access to the interpretation of images originally intended for strictly visual apprehension through other perceptive modes. Based on this theory and adopting principles of content analysis, we moved through the interpretation of the data constructed from the analysis of documents, from the subject's speech, from records of classroom observation and from other field-diary notes. The search for images of school content adapted to the tactile apprehension of the blind student found little, and nothing systematic, in the teaching practice of the school. It showed us an itinerary of school life marked by a succession of supports, most of them inappropriate and premature, dampening the construction of her autonomy. It also showed us the tensions and contradictions of a supposedly inclusive school environment that stumbles, in pursuit of its intent, over the attitudinal and accumulated barriers it carries, aggravated by their continued maintenance. These findings arose from crossing data around a categorisation that gives importance to 1) concepts regarding school inclusion, 2) elements of school organisation, educational proposal and teaching practice, 3) the meaning of the visual image as an object of knowledge, 4) perception through multiple senses, and 5) the development and learning of the blind person in the face of the impositions of the social environment. In light of these findings we infer that the disabled person must be guaranteed the removal of the attitudinal barriers that work against his or her full development and the construction of his or her autonomy.
In that sense, the student with visual disability should be given, like all students, not only access to school but also an effective school life, which means the apprehension of knowledge in all its modalities, including imagery. To that end, there is a need for the continued training of teachers, for the construction of a support network responsive to all students' needs, and for opportunities to develop reading skills beyond a perspective focused eminently on sight.

Relevance:

60.00%

Publisher:

Abstract:

We present a novel stereo-to-multiview video conversion method for glasses-free multiview displays. Different from previous stereo-to-multiview approaches, our mapping algorithm utilizes the limited depth range of autostereoscopic displays optimally and strives to preserve the scene's artistic composition and perceived depth even under strong depth compression. We first present an investigation of how perceived image quality relates to spatial frequency and disparity. The outcome of this study is utilized in a two-step mapping algorithm, where we (i) compress the scene depth using a non-linear global function to the depth range of an autostereoscopic display and (ii) enhance the depth gradients of salient objects to restore the perceived depth and salient scene structure. Finally, an adapted image domain warping algorithm is proposed to generate the multiview output, which enables overall disparity range extension.
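
A minimal sketch of the two-step mapping idea described above, not the authors' implementation: step (i) compresses scene disparity into a display's comfort range with a non-linear global curve, and step (ii) boosts the disparity contrast of salient content. The logarithmic curve, the saliency-weighted gain and all names are illustrative assumptions.

```python
import numpy as np

def compress_disparity(disp, d_min_out, d_max_out):
    """Step (i): non-linear global mapping of scene disparity into the display range."""
    d_min, d_max = disp.min(), disp.max()
    t = (disp - d_min) / max(d_max - d_min, 1e-6)      # normalise to [0, 1]
    t = np.log1p(9.0 * t) / np.log(10.0)               # expand near range, compress far range
    return d_min_out + t * (d_max_out - d_min_out)

def enhance_salient_depth(disp, saliency, gain=1.5):
    """Step (ii): restore perceived depth by amplifying disparity contrast where saliency is high."""
    mean = disp.mean()
    return mean + (disp - mean) * (1.0 + (gain - 1.0) * saliency)

# Example: map a wide disparity field into a +/-8 px autostereoscopic budget.
disp = np.random.uniform(-40, 40, size=(240, 320))
saliency = np.random.uniform(0, 1, size=disp.shape)
out = enhance_salient_depth(compress_disparity(disp, -8.0, 8.0), saliency)
```

A full system would follow this remapping with image domain warping to synthesize the multiview output; the sketch only covers the disparity remapping stage.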

Relevance:

30.00%

Publisher:

Abstract:

Texture analysis and textural cues have been applied for image classification, segmentation and pattern recognition. Dominant texture descriptors include directionality, coarseness, line-likeness etc. In this dissertation a class of textures known as particulate textures are defined, which are predominantly coarse or blob-like. The set of features that characterise particulate textures are different from those that characterise classical textures. These features are micro-texture, macro-texture, size, shape and compaction. Classical texture analysis techniques do not adequately capture particulate texture features. This gap is identified and new methods for analysing particulate textures are proposed. The levels of complexity in particulate textures are also presented ranging from the simplest images where blob-like particles are easily isolated from their back- ground to the more complex images where the particles and the background are not easily separable or the particles are occluded. Simple particulate images can be analysed for particle shapes and sizes. Complex particulate texture images, on the other hand, often permit only the estimation of particle dimensions. Real life applications of particulate textures are reviewed, including applications to sedimentology, granulometry and road surface texture analysis. A new framework for computation of particulate shape is proposed. A granulometric approach for particle size estimation based on edge detection is developed which can be adapted to the gray level of the images by varying its parameters. This study binds visual texture analysis and road surface macrotexture in a theoretical framework, thus making it possible to apply monocular imaging techniques to road surface texture analysis. Results from the application of the developed algorithm to road surface macro-texture, are compared with results based on Fourier spectra, the auto- correlation function and wavelet decomposition, indicating the superior performance of the proposed technique. The influence of image acquisition conditions such as illumination and camera angle on the results was systematically analysed. Experimental data was collected from over 5km of road in Brisbane and the estimated coarseness along the road was compared with laser profilometer measurements. Coefficient of determination R2 exceeding 0.9 was obtained when correlating the proposed imaging technique with the state of the art Sensor Measured Texture Depth (SMTD) obtained using laser profilometers.
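
A hedged sketch of an edge-based granulometric estimate in the spirit described above (not the dissertation's algorithm): the average spacing between edge crossings along image rows is taken as a coarseness proxy, with the Sobel threshold standing in for the gray-level-adaptive parameters.

```python
import numpy as np
from scipy import ndimage

def edge_based_coarseness(image, edge_threshold=0.2):
    """Estimate a dominant particle size (in pixels) from horizontal edge spacing."""
    grad = np.abs(ndimage.sobel(image.astype(float), axis=1))
    edges = grad / (grad.max() + 1e-9) > edge_threshold
    spacings = []
    for row in edges:
        cols = np.flatnonzero(row)
        if cols.size > 1:
            spacings.extend(np.diff(cols))
    return float(np.mean(spacings)) if spacings else float("nan")

# Example: coarser synthetic blobs should yield a larger spacing estimate.
rng = np.random.default_rng(0)
fine = ndimage.gaussian_filter(rng.normal(size=(256, 256)), 1.5)
coarse = ndimage.gaussian_filter(rng.normal(size=(256, 256)), 6.0)
print(edge_based_coarseness(fine), edge_based_coarseness(coarse))
```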

Relevance:

30.00%

Publisher:

Abstract:

Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed with deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant, and as a result the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training on both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, and this is an essential requirement for non-invertibility. The method is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
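
A minimal sketch of the generic pipeline named above (feature extraction, key-dependent randomization, quantization against learnt thresholds, binary encoding). The random-projection randomizer and median thresholds are illustrative assumptions, not the dissertation's HOS/Radon method.

```python
import numpy as np

class RobustHasher:
    def __init__(self, n_features, n_bits, key=0):
        rng = np.random.default_rng(key)              # secret key seeds the randomization
        self.P = rng.normal(size=(n_bits, n_features))
        self.thresholds = None

    def fit(self, training_features):
        """Quantizer training: learn per-dimension thresholds (medians here)."""
        z = training_features @ self.P.T
        self.thresholds = np.median(z, axis=0)

    def hash(self, feature_vector):
        """Project, quantize against learnt thresholds, and binary-encode."""
        z = self.P @ feature_vector
        return (z > self.thresholds).astype(np.uint8)

# Example: a slightly perturbed input should flip only a few hash bits.
rng = np.random.default_rng(1)
h = RobustHasher(64, 32, key=42)
h.fit(rng.normal(size=(500, 64)))
x = rng.normal(size=64)
print(np.sum(h.hash(x) != h.hash(x + 0.01 * rng.normal(size=64))))
```

The thesis' point is visible even in this toy: the learnt thresholds are extra information that affects both how robust the bits are and how much the hash leaks about the input.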

Relevance:

30.00%

Publisher:

Abstract:

Traditional nearest points methods use all the samples in an image set to construct a single convex or affine hull model for classification. However, strong artificial features and noisy data may be generated from combinations of training samples when significant intra-class variations and/or noise occur in the image set. Existing multi-model approaches extract local models by clustering each image set individually only once, with fixed clusters used for matching with various image sets. This may not be optimal for discrimination, as undesirable environmental conditions (e.g. illumination and pose variations) may result in the two closest clusters representing different characteristics of an object (e.g. a frontal face being compared to a non-frontal face). To address the above problem, we propose a novel approach to enhance nearest points based methods by integrating affine/convex hull classification with an adapted multi-model approach. We first extract multiple local convex hulls from a query image set via maximum margin clustering to diminish the artificial variations and constrain the noise in local convex hulls. We then propose adaptive reference clustering (ARC) to constrain the clustering of each gallery image set by forcing the clusters to resemble the clusters in the query image set. By applying ARC, noisy clusters in the query set can be discarded. Experiments on the Honda, MoBo and ETH-80 datasets show that the proposed method outperforms single-model approaches and other recent techniques, such as Sparse Approximated Nearest Points, Mutual Subspace Method and Manifold Discriminant Analysis.
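
A hedged sketch of the nearest-points idea underlying hull-based image-set classification: the distance between the affine hulls of two sets is the minimum distance between points expressed as affine combinations of the set samples. The convex-hull variant used above additionally needs non-negativity constraints (a small QP), omitted here; all names are illustrative.

```python
import numpy as np

def affine_hull_distance(X, Y):
    """X, Y: (n_samples, dim) arrays; returns min ||x - y|| over the two affine hulls."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    # Columns span each affine hull around its mean; solve a joint least-squares problem.
    A = np.hstack([(X - mx).T, -(Y - my).T])
    coef, *_ = np.linalg.lstsq(A, my - mx, rcond=None)
    residual = (mx - my) + A @ coef
    return np.linalg.norm(residual)

# Example: two overlapping sample clouds have a near-zero hull-to-hull distance.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))
Y = rng.normal(loc=0.1, size=(20, 50))
print(affine_hull_distance(X, Y))
```

In a multi-model setup such as the one proposed above, this distance would be computed between local hulls (clusters) rather than whole sets, and the smallest cluster-to-cluster distance would drive the classification.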

Relevance:

30.00%

Publisher:

Abstract:

We present a signal processing approach using discrete wavelet transform (DWT) for the generation of complex synthetic aperture radar (SAR) images at an arbitrary number of dyadic scales of resolution. The method is computationally efficient and is free from significant system-imposed limitations present in traditional subaperture-based multiresolution image formation. Problems due to aliasing associated with biorthogonal decomposition of the complex signals are addressed. The lifting scheme of DWT is adapted to handle complex signal approximations and employed to further enhance the computational efficiency. Multiresolution SAR images formed by the proposed method are presented.
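
A small illustration of the point that the lifting scheme carries over to complex-valued signals, since its predict and update steps are linear. This is a plain Haar lifting step on a complex 1-D signal, not the biorthogonal filters or anti-aliasing treatment used for SAR imagery.

```python
import numpy as np

def haar_lifting_forward(x):
    """One dyadic level of Haar lifting on a complex 1-D signal (even length)."""
    even, odd = x[0::2].copy(), x[1::2].copy()
    odd -= even                      # predict: detail = odd - even
    even += 0.5 * odd                # update: approximation keeps the local mean
    return even, odd                 # coarse-scale approximation, detail

def haar_lifting_inverse(even, odd):
    even = even - 0.5 * odd
    odd = odd + even
    x = np.empty(even.size + odd.size, dtype=complex)
    x[0::2], x[1::2] = even, odd
    return x

# Example: perfect reconstruction of a complex (SAR-like) signal.
rng = np.random.default_rng(0)
x = rng.normal(size=64) + 1j * rng.normal(size=64)
approx, detail = haar_lifting_forward(x)
print(np.allclose(haar_lifting_inverse(approx, detail), x))
```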

Relevance:

30.00%

Publisher:

Abstract:

Reconstruction of an image from a set of projections has been adapted to generate multidimensional nuclear magnetic resonance (NMR) spectra, which have discrete features that are relatively sparsely distributed in space. For this reason, a reliable reconstruction can be made from a small number of projections. This new concept is called Projection Reconstruction NMR (PR-NMR). In this paper, multidimensional NMR spectra are reconstructed by Reversible Jump Markov Chain Monte Carlo (RJMCMC). This statistical method generates samples under the assumption that each peak is described by a small number of parameters: the position of the peak centre, the peak amplitude, and the peak width. In order to find the number of peaks and their shapes, RJMCMC has several moves: birth, death, merge, split, and invariant updating. The reconstruction schemes are tested on a set of six projections derived from the three-dimensional 700 MHz HNCO spectrum of the protein HasA.
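
A hedged sketch of the peak parameterization and of trans-dimensional sampling of the kind named above: each peak carries a centre, amplitude and width, and birth/death moves change the number of peaks. Priors, likelihood, proposal details and the omission of the merge/split/update moves are simplifying assumptions for illustration, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
data = 2.0 * np.exp(-0.5 * ((x - 0.4) / 0.02) ** 2) + 0.05 * rng.normal(size=x.size)

def model(peaks):
    """Sum of Gaussian peaks, each (centre, amplitude, width)."""
    y = np.zeros_like(x)
    for c, a, w in peaks:
        y += a * np.exp(-0.5 * ((x - c) / w) ** 2)
    return y

def log_likelihood(peaks, sigma=0.05):
    return -0.5 * np.sum((data - model(peaks)) ** 2) / sigma ** 2

peaks = []                                    # start with an empty model
for it in range(2000):
    proposal = list(peaks)
    if rng.random() < 0.5 or not peaks:       # birth: add a random peak
        proposal.append((rng.random(), rng.uniform(0, 3), rng.uniform(0.005, 0.1)))
    else:                                     # death: remove a random peak
        proposal.pop(rng.integers(len(proposal)))
    # Accept/reject (flat priors and symmetric dimension proposals assumed).
    if np.log(rng.random()) < log_likelihood(proposal) - log_likelihood(peaks):
        peaks = proposal

print(len(peaks), "peak(s) retained")
```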

Relevance:

30.00%

Publisher:

Abstract:

The thesis focuses on Computer Vision and, more specifically, on image segmentation, one of the basic stages of image analysis, which consists of dividing the image into a set of visually distinct and uniform regions with respect to their intensity, colour or texture. A strategy is proposed based on the complementary use of region and boundary information during the segmentation process, an integration that alleviates some of the basic problems of traditional segmentation. The boundary information is first used to identify the number of regions present in the image and to place a seed inside each of them, with the aim of statistically modelling the characteristics of the regions and thereby defining the region information. This information, together with the boundary information, is used to define an energy function that expresses the properties required of the desired segmentation: uniformity inside the regions and contrast with the neighbouring regions at their limits. A set of active regions then begins to grow, competing for the pixels of the image, with the goal of optimising the energy function or, in other words, of finding the segmentation that best meets the requirements expressed in that function. Finally, this whole process has been embedded in a pyramidal structure, which allows the segmentation result to be refined progressively and its computational cost to be improved. The strategy has been extended to the texture segmentation problem, which involves some basic considerations such as modelling the regions from a set of texture features and extracting the boundary information when texture is present in the image. Finally, the extension to image segmentation taking both colour and texture properties into account has been carried out. In this respect, the joint use of non-parametric density estimation techniques for describing colour, and of textural features based on the co-occurrence matrix, is proposed to model the image regions adequately and completely. The proposal has been evaluated objectively and compared with different integration techniques using synthetic images. In addition, experiments with real images have been included, with very positive results.
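
A minimal sketch of an energy function of the kind described above: a region term (negative log-likelihood of each pixel under the Gaussian model of the region seeded inside it) plus a boundary term that penalises label transitions lying on weak image gradients. Weights and models are illustrative assumptions, not the thesis' exact formulation.

```python
import numpy as np
from scipy import ndimage

def segmentation_energy(image, labels, region_stats, beta=1.0):
    """image, labels: 2-D arrays; region_stats: {label: (mean, std)} learnt from the seeds."""
    # Region term: how well each pixel fits its region's statistics (uniform interiors).
    region_term = 0.0
    for lab, (mu, sigma) in region_stats.items():
        vals = image[labels == lab].astype(float)
        region_term += np.sum(0.5 * ((vals - mu) / sigma) ** 2 + np.log(sigma))
    # Boundary term: label changes should coincide with strong gradients (contrasted borders).
    grad = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma=1.0)
    edges_h = labels[:, 1:] != labels[:, :-1]
    edges_v = labels[1:, :] != labels[:-1, :]
    boundary_term = np.sum(np.exp(-grad[:, 1:][edges_h])) + np.sum(np.exp(-grad[1:, :][edges_v]))
    return region_term + beta * boundary_term
```

Region growing would then assign a contested pixel to a competing active region only when the assignment lowers this energy, which is what enforces uniform interiors and contrasted boundaries simultaneously.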

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

The aim of this paper is to present a photogrammetric method for determining the dimensions of flat surfaces, such as billboards, based on a single digital image. A mathematical model was adapted to generate linear equations for vertical and horizontal lines in the object space. These lines are identified and measured in the image, and the rotation matrix is computed using an indirect method. The distance between the camera and the surface is measured with a lasermeter, providing the coordinates of the camera perspective center. The eccentricity of the lasermeter center relative to the camera perspective center is modeled by three translations, which are computed using a calibration procedure. Some experiments were performed to test the proposed method and the achieved results are within a relative error of about 1 percent in areas and distances in the object space. This accuracy fulfills the requirements of the intended applications. © 2005 American Society for Photogrammetry and Remote Sensing.
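
A hedged worked example of the single-image measurement principle: once the recovered rotation aligns the image plane with the flat surface and the camera-to-surface distance Z is known from the lasermeter, object-space lengths follow from the pinhole relation x_obj = x_img · Z / f. The numbers and function name are made up for illustration, not the paper's data.

```python
def object_length(image_length_mm, focal_length_mm, distance_m):
    """Length on the flat surface corresponding to a length measured in the image plane."""
    return image_length_mm * distance_m / focal_length_mm

# Example: a 12 mm x 8 mm image rectangle, 35 mm focal length, surface 10 m away.
width = object_length(12.0, 35.0, 10.0)    # ~3.43 m
height = object_length(8.0, 35.0, 10.0)    # ~2.29 m
print(width, height, width * height)       # billboard dimensions and area (m, m, m^2)
```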

Relevance:

30.00%

Publisher:

Abstract:

Purpose: Physiological respiratory motion of tumors growing in the lung can be corrected with respiratory gating when treated with radiotherapy (RT). The optimal respiratory phase for beam-on may be assessed with a respiratory phase optimizer (RPO), a 4D image processing software developed for this purpose. Methods and Materials: Fourteen patients with lung cancer were included in the study. Every patient underwent a 4D-CT providing ten datasets of ten phases of the respiratory cycle (0-100% of the cycle). We defined two morphological parameters for comparison of 4D-CT images in different respiratory phases: tumor-volume to lung-volume ratio and tumor-to-spinal cord distance. The RPO automated the calculations (200 per patient) of these parameters for each phase of the respiratory cycle, allowing the optimal interval for RT to be determined. Results: Lower lobe lung tumors not attached to the diaphragm presented the largest motion with breathing. Maximum inspiration was considered the optimal phase for treatment in 4 patients (28.6%). In 7 patients (50%), however, the RPO showed a more favorable volumetric and spatial configuration in phases other than maximum inspiration. In 2 cases (14.4%) the RPO showed no benefit from gating. This tool was not conclusive in only one case. Conclusions: The RPO software presented in this study can help to determine the optimal respiratory phase for gated RT based on a few simple morphological parameters. Easy to apply in daily routine, it may be a useful tool for selecting patients who might benefit from breathing-adapted RT.
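
A hedged sketch of the two morphological parameters the RPO automates per respiratory phase: the tumor-volume to lung-volume ratio and the tumor-to-spinal-cord distance. Taking the phase that maximises the tumor-to-cord margin as "optimal" is a simplifying assumption for illustration, not the paper's decision rule, and the voxel spacing is made up.

```python
import numpy as np

def phase_parameters(tumor_mask, lung_mask, cord_mask, voxel_mm):
    """Compute (tumor/lung volume ratio, minimum tumor-to-cord distance in mm) for one phase."""
    ratio = tumor_mask.sum() / max(lung_mask.sum(), 1)
    tumor_pts = np.argwhere(tumor_mask) * voxel_mm
    cord_pts = np.argwhere(cord_mask) * voxel_mm
    dists = np.linalg.norm(tumor_pts[:, None, :] - cord_pts[None, :, :], axis=-1)
    return ratio, dists.min()

def optimal_phase(phases, voxel_mm=np.array([2.5, 1.0, 1.0])):
    """phases: list of (tumor_mask, lung_mask, cord_mask) tuples, one per 4D-CT phase."""
    margins = [phase_parameters(t, l, c, voxel_mm)[1] for t, l, c in phases]
    return int(np.argmax(margins))   # index of the phase with the largest tumor-to-cord margin
```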

Relevance:

30.00%

Publisher:

Abstract:

A new generation of high definition computed tomography (HDCT) 64-slice devices, complemented by a new iterative image reconstruction algorithm (adaptive statistical iterative reconstruction), offers substantially higher resolution compared to standard definition CT (SDCT) scanners. As higher resolution confers higher noise, we have compared image quality and radiation dose of coronary computed tomography angiography (CCTA) from HDCT versus SDCT. Consecutive patients (n = 93) underwent HDCT and were compared to 93 patients who had previously undergone CCTA with SDCT, matched for heart rate (HR), HR variability and body mass index (BMI). Tube voltage and current were adapted to the patient's BMI, using identical protocols in both groups. The image quality of all CCTA scans was evaluated by two independent readers in all coronary segments using a 4-point scale (1, excellent image quality; 2, blurring of the vessel wall; 3, image with artefacts but evaluable; 4, non-evaluable). Effective radiation dose was calculated from the DLP multiplied by a conversion factor (0.014 mSv/mGy × cm). The mean image quality score from HDCT versus SDCT was comparable (2.02 ± 0.68 vs. 2.00 ± 0.76). Mean effective radiation dose did not significantly differ between HDCT (1.7 ± 0.6 mSv, range 1.0-3.7 mSv) and SDCT (1.9 ± 0.8 mSv, range 0.8-5.5 mSv; P = n.s.). HDCT scanners allow low-dose 64-slice CCTA scanning with higher resolution than SDCT but maintained image quality and equally low radiation dose. Whether this will translate into higher accuracy of HDCT for CAD detection remains to be evaluated.
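
A minimal worked example of the dose calculation quoted above: effective dose (mSv) = dose-length product (mGy·cm) × 0.014 mSv/(mGy·cm). The DLP values below are made up for illustration only; they are not the study's data.

```python
def effective_dose_msv(dlp_mgy_cm, k=0.014):
    """Effective dose from the dose-length product and the chest conversion factor."""
    return dlp_mgy_cm * k

print(effective_dose_msv(121))   # ~1.7 mSv, in the range reported for HDCT
print(effective_dose_msv(136))   # ~1.9 mSv, in the range reported for SDCT
```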

Relevance:

30.00%

Publisher:

Abstract:

2D-3D registration of pre-operative 3D volumetric data with a series of calibrated and undistorted intra-operative 2D projection images has shown great potential in CT-based surgical navigation because it obviates the invasive procedure of the conventional registration methods. In this study, a recently introduced spline-based multi-resolution 2D-3D image registration algorithm has been adapted together with a novel least-squares normalized pattern intensity (LSNPI) similarity measure for image guided minimally invasive spine surgery. A phantom and a cadaver together with their respective ground truths were specially designed to experimentally assess possible factors that may affect the robustness, accuracy, or efficiency of the registration. Our experiments have shown that it is feasible for the assessed 2D-3D registration algorithm to achieve sub-millimeter accuracy in a realistic setup in less than one minute.
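
A hedged sketch of a pattern-intensity-style similarity between an intra-operative projection image and a digitally reconstructed radiograph (DRR) rendered from the CT at the current pose estimate: the measure rewards a locally flat difference image. This simplified 4-neighbour form is an assumption for illustration, not the paper's least-squares normalized variant.

```python
import numpy as np

def pattern_intensity(xray, drr, sigma=10.0):
    """Higher values mean the DRR (and hence the pose) is more consistent with the X-ray."""
    diff = xray.astype(float) - drr.astype(float)
    score = 0.0
    for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):          # 4-neighbourhood
        shifted = np.roll(np.roll(diff, dy, axis=0), dx, axis=1)
        score += np.sum(sigma ** 2 / (sigma ** 2 + (diff - shifted) ** 2))
    return score
```

In a registration loop, this score would be maximised over the six rigid pose parameters, with the multi-resolution spline pyramid used to keep the optimisation fast and robust.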

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Patient-to-image registration is a core process of image-guided surgery (IGS) systems. We present a novel registration approach for application in laparoscopic liver surgery, which reconstructs in real time an intraoperative volume of the underlying intrahepatic vessels through an ultrasound (US) sweep process. METHODS: An existing IGS system for an open liver procedure was adapted, with suitable instrument tracking for laparoscopic equipment. Registration accuracy was evaluated on a realistic phantom by computing the target registration error (TRE) for 5 intrahepatic tumors. The registration workflow was evaluated by computing the time required for performing the registration. Additionally, a scheme for intraoperative accuracy assessment by visual overlay of the US image with preoperative image data was evaluated. RESULTS: The proposed registration method achieved an average TRE of 7.2 mm in the left lobe and 9.7 mm in the right lobe. The average time required for performing the registration was 12 minutes. A positive correlation was found between the intraoperative accuracy assessment and the obtained TREs. CONCLUSIONS: The registration accuracy of the proposed method is adequate for laparoscopic intrahepatic tumor targeting. The presented approach is feasible and fast and may, therefore, not be disruptive to the current surgical workflow.
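
A hedged sketch of how a target registration error (TRE) of the kind reported above is computed on a phantom: the estimated rigid patient-to-image transform is applied to known tumor target points and the distances to their ground-truth positions are measured. The transform and points below are illustrative, not the study's data.

```python
import numpy as np

def target_registration_error(targets_patient, targets_image_truth, R, t):
    """targets_*: (n, 3) arrays in mm; R, t: estimated rigid registration. Returns per-target TRE in mm."""
    mapped = targets_patient @ R.T + t
    return np.linalg.norm(mapped - targets_image_truth, axis=1)

# Example with a small translational registration error standing in for the real one.
truth = np.array([[10.0, 20.0, 30.0], [40.0, 5.0, 25.0]])
R, t = np.eye(3), np.array([1.5, -0.5, 0.8])
print(target_registration_error(truth, truth, R, t))   # ~1.8 mm for each target
```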

Relevance:

30.00%

Publisher:

Abstract:

A generic bio-inspired adaptive architecture for image compression suitable to be implemented in embedded systems is presented. The architecture allows the system to be tuned during its calibration phase. An evolutionary algorithm is responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core directed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm and its typical genetic operators adapted to allow for a hardware-friendly implementation. HW/SW partitioning issues are also considered after a high-level description of the algorithm is profiled, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. Also, this prototype implementation may serve as an accelerator for the automatic design of evolved transform coefficients which are later synthesized and implemented in a non-adaptive system in the final implementation device, whether it is a HW- or SW-based computing device. The architecture has been built in a modular way so that it can be easily extended to adapt other types of image processing cores. Details on this pluggable-component point of view are also given in the paper.
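
A hedged software sketch of the adaptation loop described above: a simple (1+λ) Evolution Strategy tunes the predict/update coefficients of a lifting wavelet so that coefficient thresholding (a stand-in for compression) loses as little energy as possible on a target image class. The genome, fitness and test image are illustrative assumptions standing in for the FPGA core and its hardware-friendly genetic operators.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)   # smooth synthetic test image

def lifting_rows(x, p, u):
    """One row-wise lifting step with tunable predict (p) and update (u) coefficients."""
    even, odd = x[:, 0::2], x[:, 1::2]
    detail = odd - p * even
    approx = even + u * detail
    return approx, detail

def fitness(genome, keep=0.25):
    """Negative energy lost when only the largest 25% of coefficients are kept."""
    p, u = genome
    approx, detail = lifting_rows(image, p, u)
    coeffs = np.hstack([approx, detail]).ravel()
    cut = np.quantile(np.abs(coeffs), 1.0 - keep)
    kept = np.where(np.abs(coeffs) >= cut, coeffs, 0.0)
    return -np.sum((coeffs - kept) ** 2)

parent = np.array([0.5, 0.25])                      # Haar-like starting point
for gen in range(50):                               # (1+4)-ES with Gaussian mutation
    children = parent + 0.1 * rng.normal(size=(4, 2))
    parent = max(list(children) + [parent], key=fitness)
print("evolved predict/update coefficients:", parent)
```

In the paper's setting this loop runs during the calibration phase, with the evolved coefficients then loaded into the (otherwise fixed) wavelet core of the deployed system.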